

Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification

Neural Information Processing Systems

We present a novel language-driven ordering alignment method for ordinal classification. The labels in ordinal classification carry additional ordering relations, making models prone to overfitting when they rely solely on training data. Recent developments in pre-trained vision-language models inspire us to leverage the rich ordinal priors in human language by converting the original task into a vision-language alignment task. Consequently, we propose L2RCLIP, which fully exploits language priors from two perspectives. First, we introduce a complementary prompt tuning technique called RankFormer, designed to enhance the ordering relation of the original rank prompts. It employs token-level attention with residual-style prompt blending in the word embedding space. Second, to further incorporate language priors, we revisit the approximate bound optimization of the vanilla cross-entropy loss and restructure it within the cross-modal embedding space. On this basis, we propose a cross-modal ordinal pairwise loss to refine the CLIP feature space, where texts and images maintain both semantic alignment and ordering alignment. Extensive experiments on three ordinal classification tasks, including facial age estimation, historical color image (HCI) classification, and aesthetic assessment, demonstrate its promising performance.
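To make the idea of a cross-modal ordinal pairwise loss concrete, here is a minimal NumPy sketch of a generic pairwise margin loss over image-text similarities. It is an illustration of the general technique, not the paper's exact formulation: the function name, the margin value, and the use of raw cosine similarity are all assumptions for demonstration.

```python
import numpy as np

def pairwise_ordinal_loss(image_emb, text_embs, label, margin=0.1):
    """Hinge-style loss encouraging image-text similarity to decrease
    as the rank of the text prompt moves away from the true label."""
    # Cosine similarities between the image and each rank's text prompt.
    sims = text_embs @ image_emb / (
        np.linalg.norm(text_embs, axis=1) * np.linalg.norm(image_emb) + 1e-8
    )
    loss = 0.0
    n = len(sims)
    for i in range(n):
        for j in range(n):
            # If rank i is closer to the true label than rank j, its
            # similarity should exceed rank j's by at least `margin`.
            if abs(i - label) < abs(j - label):
                loss += max(0.0, margin + sims[j] - sims[i])
    return loss
```

In this toy form, the loss is zero when similarities are already correctly ordered around the true rank with the required margin, and grows with each violated pair; the actual L2RCLIP loss is derived from an approximate bound optimization of cross-entropy rather than a plain hinge.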


Supplementary Materials of Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification

Neural Information Processing Systems

The specifics of our experimental settings are outlined in Section 1.2. Additional ablation study: Figure 1 presents the embedding spaces corresponding to various settings. Feature demarcations across different ranks are ambiguous, displaying considerable overlap, which underscores the criticality of their synergistic implementation. We also explore the role of different batch-size settings and report the results in Table 1.

